11 research outputs found
Priority Lists for Power System Investments: Locating Phasor Measurement Units
Power systems incrementally and continuously upgrade their components, such
as transmission lines, reactive capacitors, or generating units.
Optimization models typically support this decision making by selecting the
best set of components to upgrade at a given point in time; after each
interval, the problem is re-optimized to find new components to add. In this
paper, we propose a decision-making
framework for incrementally updating power system components. This is an
alternative approach to the classical sequential re-optimization decision
making for an investment problem with modeled budget constraints. Our approach
provides a priority list as a solution with a list of new components to
upgrade. We show that i) our framework is consistent with the evolution of
power system upgrades, and ii) under particular circumstances, both frameworks
provide the same solution if the problem satisfies the submodularity property. We
have selected the problem of phasor measurement unit localization and compared
the solution with the classical sequential re-optimization framework. For this
particular problem, we show that the two approaches provide close results,
while only our proposed algorithm is applicable in practice. The cases of 14
and 118 IEEE buses are used to illustrate the proposed methodology
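The abstract does not spell out the selection procedure, but for submodular objectives the standard way to build such a priority list is greedy selection by marginal gain. The sketch below is a minimal illustration on a made-up 5-bus graph, where a PMU at a bus observes the bus and its neighbors (a submodular coverage function); the network and observability model are assumptions for illustration, not taken from the paper.

```python
# Sketch: a greedy "priority list" for PMU placement on a toy graph.
# The coverage function f(S) = number of buses observed (a PMU at bus b
# observes b and its neighbors) is submodular, so the greedy order is a
# valid priority list: for every budget k, the first k entries form a
# near-optimal set. Network and observability model are illustrative.

def greedy_priority_list(buses, adjacency, budget):
    """Return buses in the order they should be upgraded."""
    def observed(placed):
        seen = set()
        for b in placed:
            seen.add(b)
            seen.update(adjacency[b])
        return len(seen)

    placed = []
    for _ in range(budget):
        # pick the bus with the largest marginal observability gain
        best = max((b for b in buses if b not in placed),
                   key=lambda b: observed(placed + [b]) - observed(placed))
        placed.append(best)
    return placed

# toy 5-bus network: bus -> neighbors
adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
print(greedy_priority_list(list(adj), adj, budget=3))  # → [2, 4, 1]
```

Because the list is built incrementally, truncating it at any budget gives the solution for that budget, which is exactly the "priority list" behavior the abstract contrasts with sequential re-optimization.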
Extended mathematical derivations for the decentralized loss minimization algorithm with the use of inverters
This document contains extended mathematical derivations for the
communication- and model-free loss minimization algorithm. The algorithm is
applied in distribution grids and exploits the capability of inverters to
control their reactive power output.
GPU-Accelerated Verification of Machine Learning Models for Power Systems
Computational tools for rigorously verifying the performance of large-scale
machine learning (ML) models have progressed significantly in recent years. The
most successful solvers employ highly specialized, GPU-accelerated branch and
bound routines. Such tools are crucial for the successful deployment of machine
learning applications in safety-critical systems, such as power systems.
Despite their successes, however, barriers prevent out-of-the-box application
of these routines to power system problems. This paper addresses this issue in
two key ways. First, to our knowledge for the first time, we enable the
simultaneous verification of multiple problems (e.g., checking all line flow
constraint violations at once rather than solving individual verification
problems). To do so, we introduce an exact transformation that converts the
"worst-case" violation across a set of potential violations into a series of
ReLU-based layers that augment the original neural network, so that verifiers
can handle it directly. Second, power
system ML models often must be verified to satisfy power flow constraints. We
propose a dualization procedure that encodes linear equality and inequality
constraints directly into the verification problem, in a manner that is
mathematically consistent with the specialized verification tools. To
demonstrate these innovations, we verify problems associated with data-driven
security-constrained DC-OPF solvers. We build and test our first set of
innovations using the α,β-CROWN solver, and we benchmark against
Gurobi 10.0. Our contributions achieve a speedup that can exceed 100x and allow
higher degrees of verification flexibility.
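The key identity behind the transformation described above is that a maximum can be expressed with ReLU operations alone, via max(a, b) = a + relu(b − a), so the worst-case violation over many constraints reduces to extra ReLU layers on top of the network. The snippet below illustrates the identity with a pairwise reduction tree; it is a sketch of the idea, not the paper's exact layer construction.

```python
import numpy as np

# The maximum over a vector of constraint violations can be computed
# exactly by ReLU layers, using max(a, b) = a + relu(b - a) applied
# pairwise in a reduction tree. Such layers can be appended to the
# original network so a verifier bounds a single scalar output.
# Illustration of the identity only, not the paper's construction.

def relu(x):
    return np.maximum(x, 0.0)

def max_via_relu(v):
    """Reduce a vector to its maximum using only ReLU operations."""
    v = np.asarray(v, dtype=float)
    while v.size > 1:
        if v.size % 2:                  # pad odd lengths by repeating
            v = np.append(v, v[-1])     # the last entry (max-neutral)
        a, b = v[0::2], v[1::2]
        v = a + relu(b - a)             # elementwise max(a, b)
    return v[0]

violations = [0.3, -1.2, 2.5, 0.0]      # e.g. per-line flow violations
print(max_via_relu(violations))         # → 2.5
```

Because every operation is affine or ReLU, the augmented network stays in the function class that branch-and-bound verifiers handle natively.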
A series multi-step approach for operation co-optimization of integrated power and natural gas systems
Power-to-gas units and gas turbines have created considerable opportunities for bidirectional interdependency between electric power and natural gas infrastructures. This paper proposes a series multi-step strategy with surrogate Lagrangian relaxation for the operation co-optimization of an integrated power and natural gas system. First, the value of the coordination capacity is treated as a contract to avoid dysfunction in either system. Then, the uncertainty and risk analysis associated with wind speed, solar radiation, and load fluctuation is carried out by generating stochastic scenarios. Finally, before employing surrogate Lagrangian relaxation, the non-linear and non-convex gas flow constraint is linearized by two-dimensional piecewise linearization. The proposed procedure includes constraints for energy storage and renewable energy sources. Two case studies verify the effectiveness of the proposed method. The surrogate Lagrangian relaxation approach combined with a branch-and-cut method improves convergence accuracy and effectively reduces decision-making time.
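The non-convexity in gas networks enters through terms like the signed quadratic f·|f| in the Weymouth pressure-flow relation; piecewise linearization replaces such terms with segments between breakpoints so a MILP solver can handle them. The sketch below illustrates the one-dimensional version of this idea on f·|f|; the breakpoint count and flow range are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal illustration of piecewise linearization: the Weymouth gas flow
# relation couples flow f and pressures through the nonconvex term f*|f|.
# Replacing it by linear segments between breakpoints yields constraints
# a MILP solver can handle. Breakpoints and range are illustrative.

def pw_linearize(fn, lo, hi, n_segments):
    """Return breakpoints (x, y) of a piecewise-linear approximation."""
    x = np.linspace(lo, hi, n_segments + 1)
    return x, fn(x)

def pw_eval(x_bp, y_bp, x):
    """Evaluate the piecewise-linear interpolant at x."""
    return np.interp(x, x_bp, y_bp)

signed_sq = lambda f: f * np.abs(f)
x_bp, y_bp = pw_linearize(signed_sq, -10.0, 10.0, 20)

f = 3.7
approx, exact = pw_eval(x_bp, y_bp, f), signed_sq(f)
print(approx, exact)   # approximation error shrinks with more segments
```

In an actual MILP formulation the interpolation would be expressed with SOS2 or binary weighting variables rather than evaluated directly, but the approximation being constrained is the same.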
Interpretable machine learning for power systems: establishing confidence in SHapley Additive exPlanations
Interpretable Machine Learning (IML) is expected to remove significant barriers to the application of Machine Learning (ML) algorithms in power systems. This letter first seeks to showcase the benefits of SHapley Additive exPlanations (SHAP) for understanding the outcomes of ML models, which are increasingly being used. Second, we seek to demonstrate that SHAP explanations are able to capture the underlying physics of the power system. To do so, we demonstrate that the Power Transfer Distribution Factors (PTDF), a physics-based linear sensitivity index, can be derived from the SHAP values. Specifically, we take the derivatives of the SHAP values of an ML model trained to learn line flows from generator power injections, using a simple DC power flow case on the 9-bus, 3-generator test network. By showing that SHAP values can be related back to the physics that underpins the power system, we build confidence in the explanations SHAP can offer.
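For a model that is exactly linear in the injections, flow = PTDF @ p, the SHAP value of injection i has the closed form φ_i = PTDF[l, i]·(p_i − E[p_i]), so differentiating the SHAP value with respect to the injection recovers the PTDF entry, which is the relationship the letter exploits. The sketch below checks this numerically; the 2×3 PTDF matrix is a made-up example, not a real network's, and the closed-form SHAP expression stands in for a trained model plus an explainer.

```python
import numpy as np

# For a linear map from injections p to line flows, flow = PTDF @ p,
# the exact SHAP value of injection i is phi[l, i] = PTDF[l, i] * (p_i
# - E[p_i]); its derivative w.r.t. p_i therefore recovers the PTDF
# entry. The PTDF matrix below is illustrative, not a real network's.

rng = np.random.default_rng(0)
PTDF = np.array([[0.4, -0.2,  0.1],
                 [0.1,  0.3, -0.5]])
p_samples = rng.normal(size=(1000, 3))   # background injection data
p_mean = p_samples.mean(axis=0)

def shap_linear(p):
    """Exact SHAP values of a linear model: one row per line flow."""
    return PTDF * (p - p_mean)           # phi[l, i]

# finite-difference derivative of phi[l, i] w.r.t. injection p_i
p0, eps = np.array([1.0, -0.5, 0.2]), 1e-6
deriv = np.empty_like(PTDF)
for i in range(3):
    dp = np.zeros(3); dp[i] = eps
    deriv[:, i] = (shap_linear(p0 + dp) - shap_linear(p0))[:, i] / eps

print(np.allclose(deriv, PTDF))          # → True
```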
Decentralized Model-free Loss Minimization in Distribution Grids with the Use of Inverters
Distribution grids are experiencing a massive penetration of fluctuating
distributed energy resources (DERs). As a result, the real-time efficient and
secure operation of distribution grids becomes a paramount problem. While
installing smart sensors and enhancing communication infrastructure improves
grid observability, it is computationally impossible for the distribution
system operator (DSO) to optimize setpoints of millions of DER units. This
paper proposes communication-free and model-free algorithms that can actively
control converter-connected devices, and can operate either as stand-alone or
in combination with centralized optimization algorithms. We address the problem
of loss minimization in distribution grids, and we analytically prove that our
proposed algorithms reduce the total grid losses without any prior information
about the network, requiring no communication, and based only on local
measurements. Going a step further, we combine our proposed local algorithms
with a central optimization of a very limited number of converters. The hybrid
approaches we propose have much lower communication and computation
requirements than traditional methods, while also providing performance
guarantees in case of communication failure. We demonstrate our algorithms in
three networks
of varying sizes: a 5-bus network, an IEEE 141-bus system, and a real Danish
distribution system.
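The abstract does not give the update rule, but a generic way to act on local measurements alone is a perturb-and-observe loop: the inverter nudges its reactive power setpoint, observes a locally measured loss proxy, and moves in the improving direction. The sketch below illustrates that generic scheme on a stand-in quadratic loss; it is an assumption-laden illustration of the communication-free idea, not the paper's algorithm or its guarantees.

```python
# Generic perturb-and-observe sketch of a model-free reactive power
# update: the inverter nudges its setpoint q, observes a locally
# measured loss proxy (here simulated by a quadratic in q, unknown to
# the controller), and descends the estimated gradient. Illustration
# only; not the specific algorithm analyzed in the paper.

def local_loss(q):
    """Stand-in for a locally measurable loss proxy."""
    return 0.8 * (q - 1.5) ** 2 + 0.2    # minimized at q = 1.5

def perturb_and_observe(q0, delta=0.05, gamma=0.5, steps=200):
    q = q0
    for _ in range(steps):
        # two local measurements around the current setpoint
        grad_est = (local_loss(q + delta)
                    - local_loss(q - delta)) / (2 * delta)
        q -= gamma * grad_est            # descend the estimated gradient
    return q

q_final = perturb_and_observe(q0=0.0)
print(round(q_final, 3))                 # converges to the minimizer 1.5
```

No network model and no communication with other units appear anywhere in the loop, which is the property the hybrid central/local schemes in the paper build on.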
Neural network interpretability for forecasting of aggregated renewable generation
With the rapid growth of renewable energy, many small photovoltaic (PV)
prosumers are emerging. Due to the uncertainty of solar power generation,
aggregated prosumers need to predict solar generation and whether it will
exceed the load. This paper presents two
interpretable neural networks to solve the problem: one binary classification
neural network and one regression neural network. The neural networks are built
using TensorFlow. The global feature importance and local feature contributions
are examined by three gradient-based methods: Integrated Gradients, Expected
Gradients, and DeepLIFT. Moreover, we detect abnormal cases when predictions
might fail by estimating the prediction uncertainty using Bayesian neural
networks. Neural networks, which are interpreted by gradient-based methods and
complemented with uncertainty estimation, provide robust and explainable
forecasting for decision-makers.
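Of the attribution methods named above, Integrated Gradients has a particularly compact definition: attribute the prediction to each feature by averaging the model's gradient along the straight path from a baseline x′ to the input x, scaled by (x − x′). The sketch below implements that definition with a midpoint Riemann sum; the toy model f(x) = Σx² with gradient 2x is an assumption standing in for the trained forecasting network.

```python
import numpy as np

# Sketch of Integrated Gradients: average the gradient along the
# straight path from a baseline x' to the input x, then scale by
# (x - x'). The toy model f(x) = sum(x**2), gradient 2x, stands in
# for the trained forecasting network.

def integrated_gradients(grad_fn, x, baseline, n_steps=256):
    alphas = (np.arange(n_steps) + 0.5) / n_steps   # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = grad_fn(path).mean(axis=0)
    return (x - baseline) * avg_grad

f = lambda x: np.sum(x ** 2, axis=-1)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 0.5])
baseline = np.zeros(3)
ig = integrated_gradients(grad_f, x, baseline)

# completeness axiom: attributions sum to f(x) - f(baseline)
print(ig, np.isclose(ig.sum(), f(x) - f(baseline)))
```

The completeness check at the end is the property that makes such attributions interpretable as a decomposition of the forecast itself.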